
    Low-Density Code-Domain NOMA: Better Be Regular

    A closed-form analytical expression is derived for the limiting empirical squared singular value density of a spreading (signature) matrix corresponding to sparse low-density code-domain (LDCD) non-orthogonal multiple-access (NOMA) with regular random user-resource allocation. The derivation relies on associating the spreading matrix with the adjacency matrix of a large semiregular bipartite graph. For a simple repetition-based sparse spreading scheme, the result directly follows from a rigorous analysis of spectral measures of infinite graphs. Turning to random (sparse) binary spreading, we harness the cavity method from statistical physics, and show that the limiting spectral density coincides in both cases. Next, we use this density to compute the normalized input-output mutual information of the underlying vector channel in the large-system limit. The latter may be interpreted as the achievable total throughput per dimension with optimum processing in a corresponding multiple-access channel setting or, alternatively, in a fully-symmetric broadcast channel setting with full decoding capabilities at each receiver. Surprisingly, the total throughput of regular LDCD-NOMA is found to be not only superior to that achieved with irregular user-resource allocation, but also to the total throughput of dense randomly-spread NOMA, for which optimum processing is computationally intractable. In contrast, the superior performance of regular LDCD-NOMA can be potentially achieved with a feasible message-passing algorithm. This observation may advocate employing regular, rather than irregular, LDCD-NOMA in 5G cellular physical layer design.
    Comment: Accepted for publication in the IEEE International Symposium on Information Theory (ISIT), June 2017
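As an orientation for how the derived spectral density feeds the throughput computation, here is a minimal sketch of the standard random-matrix identity relating the limiting squared-singular-value distribution of the spreading matrix to the normalized mutual information under Gaussian signaling; the dimensions and normalization of the spreading matrix below are assumptions and may differ from the paper's conventions.

```latex
% Minimal sketch (standard identity, Gaussian signaling assumed).
% Channel: y = A x + n, with an N x K spreading matrix A,
% x ~ CN(0, snr * I_K), n ~ CN(0, I_N).
\begin{align}
  \frac{1}{N}\, I(\mathbf{x};\mathbf{y})
    &= \frac{1}{N}\,\mathbb{E}\!\left[\log\det\!\left(\mathbf{I}_N
       + \mathsf{snr}\,\mathbf{A}\mathbf{A}^{\mathsf{H}}\right)\right] \notag\\
    &\xrightarrow[N\to\infty]{}\ \int \log\!\left(1 + \mathsf{snr}\,\lambda\right)\,\mathrm{d}F(\lambda),
\end{align}
% where F is the limiting empirical distribution of the eigenvalues of A A^H
% (the squared singular values of A), i.e., the density derived in closed form
% in the paper.
```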

    Joint Interference Alignment and Bi-Directional Scheduling for MIMO Two-Way Multi-Link Networks

    By means of the emerging technique of dynamic Time Division Duplex (TDD), the switching point between uplink and downlink transmissions can be optimized across a multi-cell system in order to reduce the impact of inter-cell interference. It has been recently recognized that optimizing also the order in which uplink and downlink transmissions, or more generally the two directions of a two-way link, are scheduled can lead to significant benefits in terms of interference reduction. In this work, the optimization of bi-directional scheduling is investigated in conjunction with the design of linear precoding and equalization for a general multi-link MIMO two-way system. A simple algorithm is proposed that performs the joint optimization of the ordering of the transmissions in the two directions of the two-way links and of the linear transceivers, with the aim of minimizing the interference leakage power. Numerical results demonstrate the effectiveness of the proposed strategy.
    Comment: To be presented at ICC 2015, 6 pages, 7 figures
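The paper jointly optimizes the bi-directional scheduling order and the linear transceivers; as a rough, hedged illustration of the transceiver part alone, the sketch below alternates receive-filter and precoder updates to reduce interference leakage in a generic K-link MIMO interference network, in the spirit of distributed interference alignment. All dimensions, the power normalization, and the omission of the scheduling step are assumptions made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def smallest_eigvecs(Q, d):
    """Return the d eigenvectors of the Hermitian matrix Q with the smallest eigenvalues."""
    _, vecs = np.linalg.eigh(Q)  # eigenvalues come back in ascending order
    return vecs[:, :d]

def interference_cov(H, V, k, power, d):
    """Interference covariance seen at receiver k from all other transmitters."""
    return sum((power / d) * H[k][l] @ V[l] @ V[l].conj().T @ H[k][l].conj().T
               for l in range(len(V)) if l != k)

def min_leakage_transceivers(H, d=1, power=1.0, iters=200):
    """Alternating leakage minimization for a K-link MIMO network.
    H[k][l]: channel from transmitter l to receiver k, shape (Nr, Nt)."""
    K, Nt = len(H), H[0][0].shape[1]
    # random orthonormal-column initialization of the precoders
    V = [np.linalg.qr(rng.standard_normal((Nt, d))
                      + 1j * rng.standard_normal((Nt, d)))[0] for _ in range(K)]
    for _ in range(iters):
        # forward direction: receive filters pick the least-interfered subspace
        U = [smallest_eigvecs(interference_cov(H, V, k, power, d), d) for k in range(K)]
        # reciprocal direction: precoders do the same in the reversed network
        Hr = [[H[k][l].conj().T for k in range(K)] for l in range(K)]  # Hr[l][k]
        V = [smallest_eigvecs(interference_cov(Hr, U, l, power, d), d) for l in range(K)]
    leakage = sum(np.trace(U[k].conj().T @ interference_cov(H, V, k, power, d) @ U[k]).real
                  for k in range(K))
    return U, V, leakage

# toy 3-link network with 2x2 MIMO channels and one stream per link
K, Nr, Nt = 3, 2, 2
H = [[rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))
      for _ in range(K)] for _ in range(K)]
U, V, leak = min_leakage_transceivers(H)
print(f"residual interference leakage: {leak:.2e}")
```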

    Quantum correlations of two-photon polarization states in the parametric down-conversion process

    We consider correlation properties of two-photon polarization states in the parametric down-conversion process. In our description of polarization states we take into account the simultaneous presence of colored and white noise in the density matrix. Within the considered model we study the dependence of the von Neumann entropy on the amount of noise in the system and derive the separability condition for the density matrix of the two-photon polarization state, using the Peres-Horodecki criterion and the majorization criterion. Then the dependence of the Bell operator (in CHSH form) on noise is studied. As a result, we give a condition for determining the presence of quantum correlations in experimental measurements of the Bell operator. Finally, we compare our calculations with experimental data [doi:10.1103/PhysRevA.73.062110] and give an estimate of the amount of noise in the photon polarization state considered there.
    Comment: 10 pages, 7 figures; corrected typo
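For orientation only: in the white-noise-only special case the two-photon state reduces to a Werner-type state, for which the separability and CHSH-violation thresholds take the familiar closed form below; the colored-noise component of the model studied in the paper is omitted here, so these expressions are illustrative rather than the paper's actual conditions.

```latex
% White-noise-only special case (Werner-type state); colored noise omitted.
\begin{align}
  \rho_W &= p\,|\Phi^{+}\rangle\langle\Phi^{+}| + \frac{1-p}{4}\,\mathbb{I}_4,
  \qquad |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|HH\rangle + |VV\rangle\bigr),\\
  \rho_W \ \text{separable (Peres-Horodecki)} &\iff p \le \tfrac{1}{3},\\
  \max_{\text{CHSH settings}}\ \bigl|\langle \mathcal{B}_{\mathrm{CHSH}} \rangle_{\rho_W}\bigr|
    &= 2\sqrt{2}\,p,
  \quad\text{so a violation } (>2) \text{ requires } p > \tfrac{1}{\sqrt{2}}.
\end{align}
```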

    Cooperative Multi-Cell Networks: Impact of Limited-Capacity Backhaul and Inter-Users Links

    Cooperative technology is expected to have a great impact on the performance of cellular or, more generally, infrastructure networks. Both multicell processing (cooperation among base stations) and relaying (cooperation at the user level) are currently being investigated. In this presentation, recent results regarding the performance of multicell processing and user cooperation under the assumption of limited-capacity inter-base-station and inter-user links, respectively, are reviewed. The survey focuses on related results derived for non-fading uplink and downlink channels of simple cellular system models. The analytical treatment, facilitated by these simple setups, enhances the insight into the limitations imposed by limited-capacity constraints on the gains achievable by cooperative techniques.

    Throughput Scaling of Wireless Networks With Random Connections

    This work studies the throughput scaling laws of ad hoc wireless networks in the limit of a large number of nodes. A random connections model is assumed in which the channel connections between the nodes are drawn independently from a common distribution. Transmitting nodes are subject to an on-off strategy, and receiving nodes employ conventional single-user decoding. The following results are proven: 1) For a class of connection models with finite mean and variance, the throughput scaling is upper-bounded by $O(n^{1/3})$ for single-hop schemes, and $O(n^{1/2})$ for two-hop (and multihop) schemes. 2) The $\Theta(n^{1/2})$ throughput scaling is achievable for a specific connection model by a two-hop opportunistic relaying scheme, which employs full, but only local, channel state information (CSI) at the receivers, and partial CSI at the transmitters. 3) By relaxing the constraints of finite mean and variance of the connection model, linear throughput scaling $\Theta(n)$ is achievable with Pareto-type fading models.
    Comment: 13 pages, 4 figures. To appear in IEEE Transactions on Information Theory
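A minimal Monte Carlo sketch of the assumed random connections model with on-off transmission and conventional single-user decoding is given below; the random activation rule and the exponential gain distribution are placeholder assumptions for illustration, not the opportunistic schemes that attain the stated scaling laws.

```python
import numpy as np

rng = np.random.default_rng(0)

def single_hop_throughput(n, snr=1.0, active_frac=0.5, trials=20):
    """Average sum rate (bits per channel use) of a one-hop scheme under the
    random connections model: power gains drawn i.i.d., a random subset of
    transmitters switched on, and single-user decoding at the receivers."""
    rates = []
    for _ in range(trials):
        g = rng.exponential(scale=1.0, size=(n, n))   # g[i, j]: gain from tx j to rx i
        on = rng.random(n) < active_frac              # simple random on-off rule (placeholder)
        direct = snr * np.diag(g) * on                # desired-signal power at each receiver
        interference = snr * (g @ on) - direct        # aggregate interference power
        sinr = direct / (1.0 + interference)
        rates.append(np.sum(np.log2(1.0 + sinr)))
    return float(np.mean(rates))

for n in (16, 64, 256, 1024):
    print(f"n = {n:5d}: sum throughput ~ {single_hop_throughput(n):.1f} bits/use")
```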

    Bayesian Active Meta-Learning for Reliable and Efficient AI-Based Demodulation

    Two of the main principles underlying the life cycle of an artificial intelligence (AI) module in communication networks are adaptation and monitoring. Adaptation refers to the need to adjust the operation of an AI module depending on the current conditions, while monitoring requires measures of the reliability of an AI module's decisions. Classical frequentist learning methods for the design of AI modules fall short on both counts of adaptation and monitoring, catering to one-off training and providing overconfident decisions. This paper proposes a solution to address both challenges by integrating meta-learning with Bayesian learning. As a specific use case, the problems of demodulation and equalization over a fading channel based on the availability of few pilots are studied. Meta-learning processes pilot information from multiple frames in order to extract useful shared properties of effective demodulators across frames. The resulting trained demodulators are demonstrated, via experiments, to offer better-calibrated soft decisions, at the computational cost of running an ensemble of networks at run time. The capacity to quantify uncertainty in the model parameter space is further leveraged by extending Bayesian meta-learning to an active setting. In this setting, the designer can sequentially select the channel conditions under which to generate data for meta-learning from a channel simulator. Bayesian active meta-learning is seen in experiments to significantly reduce the number of frames required to obtain an efficient adaptation procedure for new frames.
    Comment: To appear in IEEE Transactions on Signal Processing
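As a toy, hedged illustration of why Bayesian ensembling helps calibration in few-pilot demodulation, the sketch below replaces the paper's meta-learned neural demodulators with a conjugate Gaussian posterior over a scalar fading channel and averages symbol likelihoods over posterior samples; the constellation, pilot count, and noise level are arbitrary choices made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# 4-QAM constellation with unit average energy (toy setting, not the paper's neural demodulator)
CONST = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def soft_demod(y, h_samples, noise_var):
    """Ensemble soft demodulation: average symbol likelihoods over channel samples, then normalize."""
    d2 = np.abs(y[:, None, None] - h_samples[None, :, None] * CONST[None, None, :]) ** 2
    lik = np.exp(-d2 / noise_var)          # shape (num_symbols, num_samples, 4)
    probs = lik.mean(axis=1)               # Bayesian model averaging over the ensemble
    return probs / probs.sum(axis=1, keepdims=True)

# --- pilots and payload over a flat-fading channel ---
noise_var, n_pilots, n_data = 0.2, 4, 2000
h_true = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
x_pilot = CONST[rng.integers(4, size=n_pilots)]
y_pilot = h_true * x_pilot + np.sqrt(noise_var / 2) * (rng.standard_normal(n_pilots)
                                                       + 1j * rng.standard_normal(n_pilots))
labels = rng.integers(4, size=n_data)
x_data = CONST[labels]
y_data = h_true * x_data + np.sqrt(noise_var / 2) * (rng.standard_normal(n_data)
                                                     + 1j * rng.standard_normal(n_data))

# --- conjugate Gaussian posterior over h given the few pilots (prior h ~ CN(0, 1)) ---
prec = 1.0 + np.sum(np.abs(x_pilot) ** 2) / noise_var
mean = np.sum(np.conj(x_pilot) * y_pilot) / noise_var / prec
post_var = 1.0 / prec

# ensemble = posterior samples; a single plug-in estimate is the frequentist baseline
h_ens = mean + np.sqrt(post_var / 2) * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
p_bayes = soft_demod(y_data, h_ens, noise_var)
p_plugin = soft_demod(y_data, np.array([mean]), noise_var)

for name, p in (("plug-in", p_plugin), ("Bayesian ensemble", p_bayes)):
    conf = p.max(axis=1)
    acc = (p.argmax(axis=1) == labels).mean()
    print(f"{name}: accuracy {acc:.3f}, mean confidence {conf.mean():.3f}")
```

With only a handful of pilots, the ensemble's mean confidence typically tracks its accuracy more closely than the plug-in demodulator's, which tends to be overconfident.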

    Calibrating AI Models for Few-Shot Demodulation via Conformal Prediction

    AI tools can be useful to address model deficits in the design of communication systems. However, conventional learning-based AI algorithms yield poorly calibrated decisions and are unable to quantify the uncertainty of their outputs. While Bayesian learning can enhance calibration by capturing epistemic uncertainty caused by limited data availability, formal calibration guarantees only hold under strong assumptions about the unknown, ground-truth data-generation mechanism. We propose to leverage the conformal prediction framework to obtain data-driven set predictions whose calibration properties hold irrespective of the data distribution. Specifically, we investigate the design of baseband demodulators in the presence of hard-to-model nonlinearities such as hardware imperfections, and propose set-based demodulators based on conformal prediction. Numerical results confirm the theoretical validity of the proposed demodulators, and provide insight into the efficiency of their average prediction set size.
    Comment: Submitted for a conference publication
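The following is a minimal sketch of split conformal prediction wrapped around a (possibly miscalibrated) plug-in soft demodulator, assuming a scalar fading channel and 4-QAM; the nonconformity score (one minus the probability assigned to the true symbol) and all constants are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(1)
CONST = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)  # 4-QAM, toy example

def soft_probs(y, h_hat, noise_var):
    """Plug-in soft demodulator (possibly miscalibrated if h_hat or the model is off)."""
    d2 = np.abs(y[:, None] - h_hat * CONST[None, :]) ** 2
    lik = np.exp(-d2 / noise_var)
    return lik / lik.sum(axis=1, keepdims=True)

def conformal_sets(p_cal, labels_cal, p_test, alpha=0.1):
    """Split conformal prediction: score = 1 - probability of the true symbol;
    the resulting sets cover the true symbol with probability >= 1 - alpha."""
    n = len(labels_cal)
    scores = 1.0 - p_cal[np.arange(n), labels_cal]
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    return (1.0 - p_test) <= q  # boolean mask: which symbols belong to each set

# fading channel with a crude (mismatched) channel estimate standing in for model deficits
noise_var, n_cal, n_test = 0.3, 200, 2000
h_true, h_hat = 0.9 * np.exp(1j * 0.4), 1.0
labels = rng.integers(4, size=n_cal + n_test)
x = CONST[labels]
y = h_true * x + np.sqrt(noise_var / 2) * (rng.standard_normal(x.size)
                                           + 1j * rng.standard_normal(x.size))

p = soft_probs(y, h_hat, noise_var)
sets = conformal_sets(p[:n_cal], labels[:n_cal], p[n_cal:], alpha=0.1)
coverage = sets[np.arange(n_test), labels[n_cal:]].mean()
avg_size = sets.sum(axis=1).mean()
print(f"empirical coverage {coverage:.3f}, average set size {avg_size:.2f}")
```

Regardless of how poor the plug-in model is, the empirical coverage should land near the 90% target; this distribution-free marginal guarantee is the calibration property the abstract refers to, while the average set size reflects efficiency.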

    Educational needs of midwife alumni working in health care centers

    Aims: Determination of educational needs is the first step in educational planning and the first factor in ensuring the quality and efficacy of the education process. Sufficient knowledge and improved decision-making among midwives will lead to better performance. The aim of this study was to determine the educational needs of midwives working in hospitals and healthcare centers of Chaharmahal & Bakhtiari province.
    Methods: This cross-sectional study was performed on 280 midwives and 50 authorities of hospitals and healthcare centers of Chaharmahal & Bakhtiari, who were selected by the census sampling method in 2009. Data were collected with a researcher-made questionnaire containing three sections: demographic characteristics, educational needs in specialty and general domains, and priority of educational needs. Data were analyzed by descriptive statistics, the Chi-square test, Student's t-test and one-way ANOVA using SPSS 15 software.
    Results: There was no significant difference between the authorities' and the midwives' points of view in the average scores of educational needs in the specialty and general domains (p>0.05). There was a significant relationship between the average score of educational needs and workplace in the obstetrics (p=0.002), maternal and child health (p=0.038) and neonatal (p=0.025) domains. There was a significant relationship between the average score of educational needs and the academic level of education in the general domains (p=0.025).
    Conclusion: Holding educational classes on English, the use of information technology (IT) in obstetrics, resuscitation, research methodology, religious and legal commandments, abnormal uterine bleeding, hypertensive disorders, neonatal medical treatment and common gynecologic infections seems essential as educational priorities.
    Keywords: Midwife, Hea